Adaptive sparse coding and dictionary selection
Grant no. D000246/1. Sparse coding is the approximation or representation of signals with the minimum number of
coefficients, using an overcomplete set of elementary functions. Such representations
have found numerous applications in source separation, denoising, coding and
compressed sensing. This thesis investigates the adaptation of the sparse approximation framework to the
signal coding problem. Open problems include the selection of appropriate
models and their orders, coefficient quantization, and the choice of sparse approximation method. Some of
these questions are addressed in this thesis and novel methods are developed. Because almost all
recent communication and storage systems are digital, an easy method to compute quantized
sparse approximations is introduced in the first part.
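As a rough illustration of the idea of a quantized sparse approximation, the following sketch runs matching pursuit while snapping each coefficient to a uniform grid. This is a toy example only; the function name, the quantizer, and all parameters are assumptions, not the thesis method.

```python
import numpy as np

def quantized_mp(x, D, n_iter=10, step=0.25):
    """Matching pursuit with coefficients snapped to a uniform grid.

    Toy sketch only (not the thesis algorithm). D is a dictionary with
    unit-norm columns (atoms); step is the quantizer step size.
    """
    r = x.copy()                             # residual
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ r                       # correlate residual with atoms
        k = np.argmax(np.abs(corr))          # best-matching atom
        q = step * np.round(corr[k] / step)  # quantize the coefficient
        if q == 0:
            break                            # nothing representable at this step size
        coeffs[k] += q
        r = r - q * D[:, k]                  # update the residual
    return coeffs, r
```

Every stored coefficient is then a multiple of `step`, which is what makes the representation directly usable in a digital coding chain.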
The model selection problem is investigated next. The linear model can be adapted to better
fit a given signal class, or it can be designed based on some a priori information about the
model. Two novel dictionary selection methods are presented separately in the second part
of the thesis. The proposed model adaptation algorithm, called Dictionary Learning with the
Majorization Method (DLMM), is much more general than current methods. This generality
allows it to be used with different constraints on the model. In particular, two important cases
are considered in this thesis for the first time: Parsimonious Dictionary Learning (PDL)
and Compressible Dictionary Learning (CDL). When the generative model order is not given,
PDL not only adapts the dictionary to the given class of signals but also reduces
model-order redundancies. When a fast dictionary is needed, the CDL framework helps to find a
dictionary adapted to the given signal class without substantially increasing the computational
cost.
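To give a flavour of a majorization-minimization dictionary update of the kind DLMM generalizes, the sketch below minimizes the fit error over the dictionary, with the sparse coefficients held fixed, by majorizing the quadratic and then projecting the atoms back to unit norm. This is a generic MM sketch under assumed notation (X data, D dictionary, C coefficients), not the exact DLMM iteration.

```python
import numpy as np

def mm_dictionary_update(D, X, C, n_iter=5):
    """Majorization-minimization updates of D minimizing ||X - D C||_F^2
    over unit-norm atoms, with the coefficients C fixed.

    Generic MM sketch, not the exact DLMM iteration from the thesis.
    """
    # The largest eigenvalue of C C^T gives a valid majorizing constant.
    c = np.linalg.norm(C, 2) ** 2 + 1e-12
    for _ in range(n_iter):
        D = D + (X - D @ C) @ C.T / c        # minimize the majorizer
        # The majorizer separates per atom, so projecting each column to
        # unit norm solves the constrained surrogate exactly.
        D = D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D
```

Because each iterate minimizes a surrogate that upper-bounds the objective and touches it at the current point, the fit error is non-increasing; swapping the unit-norm projection for another constraint projection is the kind of flexibility the abstract refers to.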
Sometimes a priori information about the linear generative model is given in the form of a parametric
function. Parametric Dictionary Design (PDD) generates a suitable dictionary for sparse
coding using this parametric function. Essentially, PDD finds a parametric dictionary with minimal
dictionary coherence, which has been shown to be suitable for sparse approximation and
exact sparse recovery.
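The coherence that PDD minimizes is the largest absolute inner product between distinct unit-norm atoms; a minimal sketch of computing it (the function name is illustrative):

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct unit-norm atoms.

    Lower coherence yields better sparse-approximation and exact-recovery
    guarantees, which is the quantity PDD minimizes over the dictionary
    parameters.
    """
    D = D / np.linalg.norm(D, axis=0)  # normalize atoms
    G = np.abs(D.T @ D)                # Gram matrix magnitudes
    np.fill_diagonal(G, 0.0)           # ignore self-correlations
    return G.max()
```

An orthonormal dictionary has coherence 0, while an overcomplete dictionary necessarily has strictly positive coherence, which is what makes the minimization non-trivial.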
Theoretical analyses are accompanied by experiments that validate them. This research
was aimed primarily at audio applications, as audio can be shown to have sparse structures.
Therefore, most of the experiments are performed on audio signals.
Fine-Grained MRI Reconstruction Using Attentive Selection Generative Adversarial Networks
Compressed sensing (CS) leverages a sparsity prior to provide the
foundation for fast magnetic resonance imaging (fastMRI). However, the iterative
solvers required for the resulting ill-posed problems hinder its adoption in time-critical
applications. Moreover, such a prior is neither rich enough to capture complicated
anatomical structures nor able to meet the demand for high-fidelity
reconstructions in modern MRI. Inspired by state-of-the-art methods in
image generation, we propose a novel attention-based deep learning framework to
provide high-quality MRI reconstruction. We incorporate large-field contextual
feature integration and attention selection in a generative adversarial network
(GAN) framework. We demonstrate that the proposed model produces results superior
to those of other deep learning-based methods in terms of image
quality and relevance to MRI reconstruction at extremely low sampling rates.
Comment: 5 pages, 2 figures, 1 table, 22 references
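The baseline that CS and learning-based reconstructions improve upon can be sketched as zero-filled reconstruction from undersampled k-space. This is a minimal illustration of the problem setup only, not the attention-based model from the paper; the mask and function names are assumptions.

```python
import numpy as np

def zero_filled_recon(image, mask):
    """Simulate undersampled acquisition and reconstruct by zero-filling.

    Toy setup sketch: a binary mask discards k-space samples (faster
    scanning), and the inverse FFT of the remaining samples is the naive
    reconstruction that CS and GAN-based methods aim to improve.
    """
    kspace = np.fft.fft2(image)                # fully sampled k-space
    undersampled = kspace * mask               # keep only masked samples
    return np.abs(np.fft.ifft2(undersampled))  # zero-filled reconstruction
```

With a full mask the image is recovered exactly; as the sampling rate drops, aliasing artifacts appear, and that is the regime where the learned prior matters.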
DeepMP for Non-Negative Sparse Decomposition
Non-negative signals form an important class of sparse signals. Many
algorithms have already been proposed to recover such non-negative
representations, among which greedy and convex-relaxation algorithms are the most
popular. Greedy techniques are low-cost algorithms,
and they have also been modified to incorporate the non-negativity of the
representations. One such modification has been proposed for Matching Pursuit
(MP) based algorithms, which first chooses positive coefficients and then uses a
non-negative optimisation technique that guarantees the non-negativity of the
coefficients. The performance of greedy algorithms, like that of all non-exhaustive
search methods, suffers when the linear generative model,
called the dictionary, has high coherence. We here first reformulate the non-negative matching
pursuit algorithm in the form of a deep neural network. We then show that, after
training, the proposed model yields a significant improvement in
exact recovery performance compared to other non-trained greedy algorithms,
while keeping the complexity low.
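A simplified version of the greedy baseline that DeepMP unrolls into a network can be sketched as matching pursuit restricted to positively correlated atoms. This is a hedged sketch of the idea only; the paper's algorithm additionally uses a non-negative optimisation step not reproduced here, and the function name is illustrative.

```python
import numpy as np

def nn_matching_pursuit(x, D, n_iter=10):
    """Matching pursuit restricted to non-negative coefficients.

    Simplified sketch: each round selects the atom with the largest
    positive correlation, so every accumulated coefficient stays >= 0.
    """
    r = x.copy()                       # residual
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ r
        k = np.argmax(corr)            # most positively correlated atom
        if corr[k] <= 0:
            break                      # no atom can reduce the residual
        coeffs[k] += corr[k]           # positive step keeps coeffs >= 0
        r = r - corr[k] * D[:, k]
    return coeffs, r
```

DeepMP's reformulation keeps this per-iteration structure but replaces the fixed correlation step with trained layers, which is where the reported gain in exact recovery comes from.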